Autonomous vehicles and people

Autonomous vehicles are supposed to be safer because they strictly follow the rules. But in the human world, rules are there to be broken.

How does traffic on the streets actually work? There are rules that most drivers more or less follow, but sometimes break when they think it necessary. You drive quickly across the crosswalk even though a pedestrian is about to step onto it, you cross the intersection on a yellow light so you don't have to brake sharply, or you cut off a cyclist or another car while entering the intersection. Speed limits are observed laxly, especially in 50 or 30 km/h zones. And then there are quirks, distractions, moods and risk-taking. All of this somehow ensures that traffic flows amazingly well, and yet accidents still happen.

According to the advocates of autonomous vehicles, these would be largely avoidable if it were no longer wetware, with all its variation and error rates, but optimized software that took the wheel. There would be no more laziness and lapses of attention, the rules would be observed perfectly, and accidents could be drastically reduced. However, only if the various technical control systems are compatible and always up to date, contain no programming errors, are not hacked, do not crash, and react immediately if a subsystem such as the wheels, turn signals, brakes or cameras is damaged or fails. And only if they are able to factor in the driving and reactions of humans, as long as some people are still steering (or allowed to steer) their own cars. Further problems can arise when masses of autonomous vehicles are on the road (Asimov's rules for the asphalt jungle).
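
Reacting immediately to a failed subsystem is, at its core, a monitoring problem. The following Python sketch shows one common pattern, a heartbeat watchdog; the subsystem names, the 100 ms deadline and the fallback behavior are illustrative assumptions, not any real vehicle's architecture.

```python
import time

HEARTBEAT_DEADLINE_S = 0.1  # assumed: every subsystem must report within 100 ms

class Watchdog:
    """Tracks the last heartbeat of each subsystem and flags overdue ones."""

    def __init__(self, subsystems):
        now = time.monotonic()
        self.last_seen = {name: now for name in subsystems}

    def heartbeat(self, name: str) -> None:
        # Called by a subsystem (camera, brakes, ...) on every healthy cycle.
        self.last_seen[name] = time.monotonic()

    def failed(self) -> list[str]:
        # Subsystems whose heartbeat deadline has passed.
        now = time.monotonic()
        return [n for n, t in self.last_seen.items()
                if now - t > HEARTBEAT_DEADLINE_S]

wd = Watchdog(["camera", "brakes", "turn_signals", "wheel_odometry"])
wd.heartbeat("camera")
time.sleep(0.15)  # simulate all subsystems going silent
if wd.failed():
    # A real vehicle would enter a degraded mode: controlled stop, hazards on.
    print("overdue subsystems:", wd.failed())
```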

Besides, following the rules is itself tricky in many cases. It has to be worked out in detail how a rule such as right-before-left is actually to be observed: how fast to approach an intersection, at what distance from another car to proceed, whether the right of way can be reliably recognized in every situation, and what to do when there is no rule-based solution, for instance when three cars meet at an intersection of three streets. How do autonomous vehicles coordinate with one another, especially if one of the cars is still controlled by a human? How long should a vehicle wait so as not to make a mistake, how quickly should it brake, and how should it take the reactions of the following vehicles into account?
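
The three-car case can be made concrete. Here is a minimal sketch, assuming a Y-junction whose arms are numbered clockwise and a strict right-before-left rule; the geometry and names are illustrative, not taken from any real system. With three cars the yielding becomes cyclic, so a pure rule-follower never moves.

```python
def right_neighbour(arm: int, n_arms: int = 3) -> int:
    # Arms are numbered clockwise; for a car entering from arm i,
    # the next arm lies on its right (an assumption for illustration).
    return (arm + 1) % n_arms

def may_proceed(arm: int, occupied: set[int]) -> bool:
    # Strict right-before-left: go only if no car waits on your right.
    return right_neighbour(arm) not in occupied

def who_may_go(occupied: set[int]) -> set[int]:
    return {arm for arm in occupied if may_proceed(arm, occupied)}

print(who_may_go({0, 1}))     # {1}: car 0 yields to car 1 on its right
print(who_may_go({0, 1, 2}))  # set(): cyclic yielding, nobody may move
```

Human drivers resolve this deadlock with eye contact or a wave of the hand; a strict rule-follower needs an extra tie-breaking protocol on top of the rule itself.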


Google autonomous vehicle. Image: Sam Churchill/CC-BY-SA-2.0

As the New York Times reports, precisely this has become a problem for Google's autonomous vehicles: they do not make enough mistakes, which would be necessary not only for taking risks but also for keeping traffic flowing. Last month, for example, the human "safety co-driver" of a Google vehicle on a test drive followed the rules and slammed on the brakes to let a pedestrian cross. That was correct, but a human-driven car behind it promptly crashed into the Google vehicle because its driver had not anticipated the braking. The mistake was the human driver's; an accident happened all the same.

There has already been the problem of an autonomous vehicle that could not proceed at an intersection because the other, human-driven cars did not come to a full stop but kept inching forward. That is difficult for a strict rule-follower that does not know its fellow road users and cannot, say, exchange glances or gestures. The Google vehicles are now also supposed to roll on slowly and observe the other cars.

In this case, if the car had been left to operate autonomously, it would have stopped safely in front of the crosswalk – great. Intriguingly, though, it would have braked slightly less hard and traveled a bit closer to the crosswalk before stopping. In other words, our software might have created some extra margin in a situation where fractions of inches and seconds mattered. There's no way to know this would have protected us from a collision; if someone's driving too close, they're still very likely to hit us. Our driver was 100% correct in hitting the brakes. But this situation highlights what computers are good at. Our software could do the math on many complicated factors all at once – pedestrian speed and trajectory, our speed and trajectory, the other vehicle's speed and trajectory. And then it could make an extremely nuanced braking calculation and implement a very controlled response, all very quickly.

According to Google, a traffic world without human drivers would be safer.
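
The "math on many complicated factors" is ordinary kinematics at heart. A toy Python sketch, with assumed numbers rather than anything from Google's software: pick the deceleration needed to stop before the crosswalk, then check what the braking implies for a following human driver with a reaction delay.

```python
def required_deceleration(speed_mps: float, distance_m: float) -> float:
    # Constant deceleration needed to stop within distance_m: a = v^2 / (2d).
    return speed_mps ** 2 / (2.0 * distance_m)

def stopping_distance(speed_mps: float, decel_mps2: float, reaction_s: float) -> float:
    # Distance covered during reaction time plus braking: v*t + v^2 / (2a).
    return speed_mps * reaction_s + speed_mps ** 2 / (2.0 * decel_mps2)

# Our car: ~50 km/h (13.9 m/s), 25 m from the crosswalk.
print(f"own braking: {required_deceleration(13.9, 25.0):.1f} m/s^2")  # 3.9

# Follower: same speed, 8 m behind, 1 s reaction, braking at 6 m/s^2.
needed = stopping_distance(13.9, 6.0, 1.0)
print(f"follower needs {needed:.1f} m, has {25.0 + 8.0:.1f} m")  # 30.0 vs 33.0
```

A gentler braking profile buys the follower distance; that is the "extra margin" the quoted report refers to.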

For the programmers, the NYT writes, the biggest problem is the transition from human drivers to traffic consisting only of controlled, overly cautious autonomous vehicles. But this is of course absurd. Even if there were only autonomous vehicles and humans were no longer allowed to drive, there would still be people walking or riding bicycles, unless the bicycle too became smart, in which case physical exercise would fall away, something that becomes all the more important the more automated every process becomes. With autonomous vehicles we will eventually be able to spare ourselves the walk to the parking lot, since they will find a space on their own and come to pick up their passengers.

"The real problem is that the vehicle is too safe", says the director of the Design Lab at the University of California, San Diego. "You have to learn to be aggressive at the right moment, and the right moment depends on the culture." That’s right, that’s another thing, there’s a difference if you’re in Denmark, Germany, Italy or Turkey. This will play a role as long as human drivers are still on the road with their cars at the same time, but it will also play a role when autonomous vehicles do not drive fully autonomously, but when the human passenger can still intervene, who sometimes quickly presses on the brake or the gas pedal pedal. That’s the problem with the human-machine system, which is still mandatory today. According to Google, in all the accidents that have occurred so far during test drives, mostly rear-end collisions, a human was always responsible. But had the autonomous driving autonomous vehicle braked more cautiously before the driver left the vehicle, so that a rear-end collision would have been avoided?. Even for Google, this is impossible to answer, although one has to rely on reliability here.

Interesting, too, is the observation that a human-machine system, as found in many driver assistance systems, is not necessarily safer. Reference is made to a study by an insurance company, according to which vehicles with a lane departure warning system had a slightly higher accident rate than vehicles without one. It is quite possible that drivers are all the more annoyed by the rules and, above all, do not want to find themselves at the wheel under the supervision of a machine that sometimes misinterprets their behavior or simply applies the rules blindly.

By Florian Rötzer, whose book Smart Cities in Cyberwar has just been published by Westend Verlag.